In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order linear homogeneous differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is built up as an infinite series whose terms involve multiple integrals and nested commutators.
Given the n × n coefficient matrix A(t), we want to solve the initial value problem associated with the linear ordinary differential equation

$$Y'(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0,$$

for the unknown n-dimensional vector function Y(t).
When n = 1, the solution reads

$$Y(t) = \exp\!\left( \int_{t_0}^{t} A(s)\,\mathrm{d}s \right) Y_0.$$
This is still valid for n > 1 if the matrix A(t) satisfies $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$ for any pair of values of t, t₁ and t₂. In particular, this is the case if the matrix A is constant. In the general case, however, the expression above is no longer the solution of the problem.
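As a quick numerical illustration of the commuting case, one can take A(t) = f(t) M for a fixed matrix M, so that A(t₁) and A(t₂) automatically commute and the scalar-case formula still applies. The matrix M, the function f, and the crude Euler cross-check below are illustrative choices, not part of the original presentation:

```python
import numpy as np
from scipy.linalg import expm

# Commuting case: A(t) = f(t) * M with a fixed matrix M, so the scalar-case
# formula Y(t) = exp(int_0^t A(s) ds) Y0 remains valid for n > 1.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda t: np.cos(t)
Y0 = np.array([1.0, 0.0])

def closed_form(t):
    # int_0^t f(s) ds = sin(t), so the exponent is sin(t) * M
    return expm(np.sin(t) * M) @ Y0

# Cross-check against a fine explicit-Euler integration of Y' = A(t) Y
t, h, Y = 0.0, 1e-5, Y0.copy()
while t < 1.0 - 1e-12:
    Y = Y + h * f(t) * (M @ Y)
    t += h

err = np.linalg.norm(Y - closed_form(1.0))
print(err)  # small: both constructions give the same trajectory
```

If M is replaced by a genuinely time-dependent, noncommuting family, this agreement is lost, which is what motivates the Magnus construction below.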
The approach proposed by Magnus to solve the matrix initial value problem is to express the solution by means of the exponential of a certain n × n matrix function $\Omega(t, t_0)$,

$$Y(t) = \exp\!\big( \Omega(t, t_0) \big)\, Y_0,$$
which is subsequently constructed as a series expansion,

$$\Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t),$$
where, for the sake of simplicity, it is customary to write $\Omega(t)$ for $\Omega(t, t_0)$ and to take t₀ = 0. The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial value problem.
The first four terms of this series read

$$\Omega_1(t) = \int_0^t A(t_1)\,\mathrm{d}t_1,$$
$$\Omega_2(t) = \frac{1}{2} \int_0^t \mathrm{d}t_1 \int_0^{t_1} \mathrm{d}t_2\, [A(t_1), A(t_2)],$$
$$\Omega_3(t) = \frac{1}{6} \int_0^t \mathrm{d}t_1 \int_0^{t_1} \mathrm{d}t_2 \int_0^{t_2} \mathrm{d}t_3\, \big( [A(t_1), [A(t_2), A(t_3)]] + [A(t_3), [A(t_2), A(t_1)]] \big),$$
$$\Omega_4(t) = \frac{1}{12} \int_0^t \mathrm{d}t_1 \int_0^{t_1} \mathrm{d}t_2 \int_0^{t_2} \mathrm{d}t_3 \int_0^{t_3} \mathrm{d}t_4\, \big( [[[A_1, A_2], A_3], A_4] + [A_1, [[A_2, A_3], A_4]] + [A_1, [A_2, [A_3, A_4]]] + [A_2, [A_3, [A_4, A_1]]] \big),$$

where $A_i \equiv A(t_i)$ and $[A, B] \equiv AB - BA$ is the matrix commutator of A and B.
These equations may be interpreted as follows: $\Omega_1(t)$ coincides exactly with the exponent in the scalar (n = 1) case, but this equation cannot give the whole solution. If one insists on having an exponential representation, the exponent has to be corrected, and the rest of the Magnus series provides that correction.
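A minimal numerical sketch of this correction: the first two terms can be approximated by simple Riemann sums and the resulting exponentials compared against a direct integration of the system. The test matrix A(t), the interval, and the discretization below are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

# First two Magnus terms for a noncommuting A(t), approximated by Riemann sums.
def A(t):
    return np.array([[0.0, t], [1.0, 0.0]])

comm = lambda X, Y: X @ Y - Y @ X

T, N = 0.5, 500
h = T / N
ts = [(k + 0.5) * h for k in range(N)]          # midpoints of the subintervals
As = [A(t) for t in ts]

O1 = h * sum(As)                                # Omega_1 = int_0^T A(t1) dt1
O2 = 0.5 * h * h * sum(comm(As[i], As[j])       # Omega_2 = (1/2) iint_{t2<t1} [A(t1), A(t2)]
                       for i in range(N) for j in range(i))

# Reference: fine explicit-Euler integration of Y' = A(t) Y with Y(0) = I
M = 100000
Y = np.eye(2)
for k in range(M):
    Y = Y + (T / M) * A(k * (T / M)) @ Y

err1 = np.linalg.norm(expm(O1) - Y)             # truncating after Omega_1
err2 = np.linalg.norm(expm(O1 + O2) - Y)        # truncating after Omega_1 + Omega_2
print(err1, err2)
```

For a noncommuting A(t), exp(Ω₁) alone misses the solution by the size of the neglected commutator terms, while adding Ω₂ visibly shrinks the error.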
In applications, one can rarely sum the Magnus series exactly and has to truncate it to obtain approximate solutions. The main advantage of Magnus's proposal is that, very often, the truncated series still shares important qualitative properties with the exact solution, in contrast with other conventional perturbation theories. For instance, in classical mechanics the symplectic character of the time evolution is preserved at every order of approximation. Similarly, the unitary character of the time-evolution operator in quantum mechanics is also preserved (in contrast, for example, to the Dyson series).
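The unitarity claim can be checked directly: if A(t) = −iH(t) with H(t) Hermitian, every Ω_k is anti-Hermitian, so the exponential of any truncation is exactly unitary. The Hamiltonian H(t) and the quadrature below are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

# With anti-Hermitian A(t) = -i H(t), each Magnus term is anti-Hermitian, so
# the truncated propagator exp(Omega_1 + Omega_2) is unitary by construction.
def H(t):
    return np.array([[1.0, t], [t, -1.0]])      # Hermitian for all t

A = lambda t: -1j * H(t)

T, N = 1.0, 300
h = T / N
As = [A((k + 0.5) * h) for k in range(N)]

O1 = h * sum(As)
O2 = 0.5 * h * h * sum(As[i] @ As[j] - As[j] @ As[i]
                       for i in range(N) for j in range(i))

U = expm(O1 + O2)                               # truncated Magnus propagator
defect = np.linalg.norm(U.conj().T @ U - np.eye(2))
print(defect)  # ~ machine precision: U is unitary despite the truncation
```

A truncated Dyson series of the same order would not satisfy U†U = I exactly, which is the contrast the text draws.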
From a mathematical point of view, the convergence problem is the following: given a certain matrix A(t), when can the exponent $\Omega(t)$ be obtained as the sum of the Magnus series? A sufficient condition for this series to converge for $t \in [0, T)$ is

$$\int_0^T \| A(s) \|_2\, \mathrm{d}s < \pi,$$

where $\| \cdot \|_2$ denotes a matrix norm. This result is generic, in the sense that one may construct specific matrices A(t) for which the series diverges for any t > T.
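As a sketch, the integral in this sufficient condition can be estimated by quadrature; the test matrix A(t) and the quadrature resolution below are illustrative choices:

```python
import numpy as np

# Estimate int_0^T ||A(s)||_2 ds by a midpoint rule and compare against pi.
def A(t):
    return np.array([[0.0, 1.0 + t], [-1.0, 0.0]])

def bound(T, N=2000):
    h = T / N
    # ord=2 gives the spectral (largest-singular-value) norm
    return h * sum(np.linalg.norm(A((k + 0.5) * h), 2) for k in range(N))

ok_short = bound(1.0) < np.pi    # condition met: convergence guaranteed on [0, 1)
ok_long = bound(3.0) < np.pi     # condition violated: convergence not guaranteed
print(ok_short, ok_long)
```

For this A(t) the norm is simply max(1, 1 + t), so the bound fails once the interval is long enough; note that violating the sufficient condition does not by itself prove divergence.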
It is possible to design a recursive procedure to generate all the terms in the Magnus expansion. Specifically, with the matrices $S_n^{(j)}$ defined recursively through

$$S_n^{(j)} = \sum_{m=1}^{n-j} \left[ \Omega_m, S_{n-m}^{(j-1)} \right], \quad 2 \le j \le n-1, \qquad S_n^{(1)} = \left[ \Omega_{n-1}, A \right], \quad S_n^{(n-1)} = \mathrm{ad}_{\Omega_1}^{\,n-1}(A),$$

one has

$$\Omega_1(t) = \int_0^t A(\tau)\,\mathrm{d}\tau, \qquad \Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \int_0^t S_n^{(j)}(\tau)\,\mathrm{d}\tau, \quad n \ge 2.$$
Here $\mathrm{ad}_\Omega^k$ is a shorthand for an iterated commutator,

$$\mathrm{ad}_\Omega^0 A = A, \qquad \mathrm{ad}_\Omega^{k} A = \left[ \Omega, \mathrm{ad}_\Omega^{k-1} A \right],$$

and $B_j$ are the Bernoulli numbers.
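The iterated commutator is straightforward to implement; the matrices below are illustrative choices used only to exercise the definition:

```python
import numpy as np

# Iterated commutator: ad^0 A = A, ad^k A = [Omega, ad^{k-1} A].
def ad(Omega, A, k):
    for _ in range(k):
        A = Omega @ A - A @ Omega
    return A

Omega = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent, for illustration
A = np.array([[1.0, 0.0], [0.0, -1.0]])

ad0 = ad(Omega, A, 0)   # A itself
ad1 = ad(Omega, A, 1)   # [Omega, A]
print(ad0)
print(ad1)
```

Helpers of this kind, combined with the recursion for the $S_n^{(j)}$, are the building blocks of programs that generate Magnus terms to arbitrary order.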
When this recursion is worked out explicitly, it is possible to express $\Omega_n(t)$ as a linear combination of n-fold integrals of n − 1 nested commutators containing n matrices A,

$$\Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \sum_{\substack{k_1 + \cdots + k_j = n-1 \\ k_1 \ge 1, \, \ldots, \, k_j \ge 1}} \int_0^t \mathrm{ad}_{\Omega_{k_1}(\tau)}\, \mathrm{ad}_{\Omega_{k_2}(\tau)} \cdots \mathrm{ad}_{\Omega_{k_j}(\tau)}\, A(\tau)\,\mathrm{d}\tau, \quad n \ge 2,$$

an expression that becomes increasingly intricate with n.
Since the 1960s, the Magnus expansion has been successfully applied as a perturbative tool in numerous areas of physics and chemistry, from atomic and molecular physics to nuclear magnetic resonance and quantum electrodynamics. Since 1998 it has also been used to construct practical algorithms for the numerical integration of matrix linear differential equations. Because they inherit from the Magnus expansion the preservation of qualitative traits of the problem, the corresponding schemes are prototypical examples of geometric numerical integrators.